A brief survey of techniques for learning distributed representations
Author
Abstract
One of the basic problems with distributed representations is coming up with the codes for the particular objects to be represented. A distributed representation that is useful for solving a particular problem is usually a re-representation of the raw input (possibly sensory data). It represents, in a salient manner, the semantic features of the input (i.e., features that are relevant to solving the problem). The raw inputs to a neural network often carry few or no clues, in terms of simple dot-product similarity of activity in the input units, as to which inputs should be treated as similar or identical, and which should be treated as different. For a feedforward neural network such as the one in Figure 1 to solve such a problem in a parsimonious fashion, the inputs must be re-represented, in the hidden layers, in such a manner that semantically similar, but neurally dissimilar, inputs evoke neurally similar activation patterns in the hidden layers. For example, if input patterns A and B are neurally dissimilar (i.e., have low dot-product similarity) but should produce the same output from the network, then one way for the network to achieve this is to have the weights entering the hidden layer transform the neurally dissimilar input-layer patterns A and B into neurally similar hidden-layer patterns. The techniques used in the literature for developing distributed representations can be classified into five broad categories: (1) random; (2) hand constructed; (3) learned in a supervised neural network from examples of input-output mappings; (4) learned in an unsupervised neural network from examples of typical inputs (with no outputs presented); and (5) derived mathematically from summary statistics of data. Unfortunately, there have not been any successful attempts to systematically learn, on a large scale, distributed representations for objects with complex structure.
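The re-representation idea in the abstract can be sketched numerically: two one-hot input patterns share no active units (dot-product similarity of zero), yet a hidden layer with suitably chosen weights maps them onto nearly identical activation patterns. This is a minimal sketch in the spirit of category (2), hand-constructed codes; the specific weight values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two one-hot input patterns: neurally dissimilar (dot product is 0).
A = np.array([1.0, 0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 0.0, 1.0])

# Hand-constructed hidden-layer weights (hypothetical values): the rows
# feeding from A's and B's active input units are nearly identical, so
# both inputs evoke neurally similar hidden-layer patterns.
W = np.array([[0.9, 0.1],   # weights from input unit 1 (active in A)
              [0.5, 0.5],
              [0.5, 0.5],
              [0.8, 0.2]])  # weights from input unit 4 (active in B)

hidden_A = np.tanh(A @ W)
hidden_B = np.tanh(B @ W)

print(cosine(A, B))                # 0.0: the inputs share no active units
print(cosine(hidden_A, hidden_B))  # close to 1.0: similar hidden patterns
```

In a supervised network (category 3), backpropagation would adjust W so that patterns requiring the same output converge on similar hidden codes; here the weights are simply set by hand to make the effect explicit.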
Similar resources
Symbolic, Distributed and Distributional Representations for Natural Language Processing in the Era of Deep Learning: a Survey
Natural language and symbols are intimately correlated. Recent advances in machine learning (ML) and in natural language processing (NLP) seem to contradict the above intuition: symbols are fading away, erased by vectors or tensors called distributed and distributional representations. However, there is a strict link between distributed/distributional representations and symbols, being the firs...
Vocabulary Teaching Techniques: A Review of Common Practices
Abundantly clear though the need for effective eclectic techniques for enhancing learners’ vocabulary learning strategies may seem, in practice, language instructors, by all accounts, tend to resort to only a few obsolete ones. This review paper aims to provide a brief account of practices in vocabulary teaching and learning by focusing on the research on teaching words in context and out...
Image Classification via Sparse Representation and Subspace Alignment
Image representation is a crucial problem in image processing, where there exist many low-level representations of an image, e.g., SIFT, HOG and so on. But there is a missing link across low-level and high-level semantic representations. In fact, traditional machine learning approaches, e.g., non-negative matrix factorization, sparse representation and principal component analysis are employed to d...
Language learning and language culture in a changing world
To communicate effectively, learners have to become proficient in both the language and the culture of the target language. Being aware of socio-cultural frameworks does not mean that as an outcome of instruction learners have to become "native-like," but an awareness of L2 cultural norms can allow learners to make their own informed choices of how to become competent and astute l...
Reinforcement Learning in Neural Networks: A Survey
In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. Using neural networks enables RL to search for optimal policies more efficiently in several real-life applicat...
Publication date: 2003